DTE AICCOMAS 2025

Finite Element Neural Network Interpolation: Interpretable and adaptive discretisation for solving PDEs

  • Skardova, Katerina (École Polytechnique)
  • Daby-Seesaram, Alexandre (École Polytechnique)
  • Genet, Martin (École Polytechnique)


In this contribution, we introduce the Finite Element Neural Network Interpolation (FENNI) framework [1,2], which builds upon the concepts of Physics-Informed Neural Networks (PINNs) [3] and Hierarchical Deep-learning Neural Networks (HiDeNN) [4]. Like classical PINNs, FENNI incorporates knowledge of the underlying physics through an appropriate formulation of the loss function based on the PDE residual. Like HiDeNN, FENNI defines the neural network approximation using interpretable, adaptive finite element shape functions. A key advantage of FENNI is its architecture, which significantly reduces the number of trainable parameters compared to fully connected neural networks. This architecture also enhances interpretability, as the individual weights and biases correspond to specific physical quantities. This property allows FENNI to handle PDEs on fixed, r-adaptive, and rh-adaptive discretisations without modifications to the model architecture. Moreover, the interpretability facilitates the strong imposition of Dirichlet boundary conditions, avoiding the common difficulties associated with balancing additional penalty terms in the loss function. The framework also supports efficient transfer learning through simple mesh refinement, further broadening its applicability. We demonstrate the capabilities of FENNI on 1D and 2D test cases, exploring several combinations of physics-informed loss functions, optimisers, and training strategies, and we discuss the strengths and limitations of these variants.

REFERENCES

[1] Skardova, K., Daby-Seesaram, A., and Genet, M. Finite Element Neural Network Interpolation: Interpretable Neural Networks for Solving PDEs. Submitted.
[2] Daby-Seesaram, A., Skardova, K., and Genet, M. Finite Element Neural Network Interpolation: Hybridisation with the Proper Generalised Decomposition for surrogate modelling. Submitted.
[3] Raissi, M., Perdikaris, P., and Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707.
[4] Zhang, L., Cheng, L., Li, H., Gao, J., Yu, C., Domel, R., Yang, Y., Tang, S., and Liu, W. K. (2021). Hierarchical deep-learning neural networks: finite elements and beyond. Computational Mechanics, 67:207–230.
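
To make the central idea concrete, the sketch below illustrates the kind of interpretable architecture the abstract describes: a 1D piecewise-linear finite element interpolation whose interior nodal values are the only trainable parameters, with Dirichlet boundary conditions imposed strongly by pinning the boundary entries, and a variational (energy-based) physics-informed loss for -u'' = 1 on (0, 1). This is a minimal illustrative sketch, not the authors' implementation; the problem, mesh, optimiser, and all names are assumptions chosen only to show the structure.

```python
# Minimal sketch (illustrative, NOT the FENNI implementation from [1,2]):
# trainable interior nodal values of a 1D piecewise-linear FE interpolation,
# trained by minimising the potential energy of -u'' = 1 on (0, 1), u(0)=u(1)=0.
import torch

torch.manual_seed(0)

n_nodes = 11
nodes = torch.linspace(0.0, 1.0, n_nodes)              # nodal coordinates (kept fixed here)
u_free = torch.zeros(n_nodes - 2, requires_grad=True)  # interior nodal values = trainable weights

def nodal_values(u_free):
    # Strong imposition of Dirichlet BCs: boundary nodal values stay exactly zero.
    zero = torch.zeros(1)
    return torch.cat([zero, u_free, zero])

def energy(u_free):
    # Discrete potential energy: sum over elements of h * (0.5*(u')^2 - f*u_mid), with f = 1.
    u = nodal_values(u_free)
    h = nodes[1:] - nodes[:-1]           # element lengths
    du = (u[1:] - u[:-1]) / h            # constant gradient on each linear element
    u_mid = 0.5 * (u[1:] + u[:-1])       # element midpoint value of u
    return torch.sum(h * (0.5 * du**2 - u_mid))

opt = torch.optim.Adam([u_free], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = energy(u_free)
    loss.backward()
    opt.step()

# Exact solution of -u'' = 1 with homogeneous BCs is u(x) = x(1 - x)/2.
u = nodal_values(u_free).detach()
print("max nodal error:", torch.max(torch.abs(u - nodes * (1 - nodes) / 2)).item())
```

Because every trainable parameter is a nodal value (and, in an r-adaptive variant, a nodal coordinate), the learned weights have a direct physical meaning, and refining the mesh simply adds parameters without changing the form of the model, which is the transfer-learning mechanism the abstract alludes to.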